
    Text to 3D Scene Generation with Rich Lexical Grounding

    The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. Comment: 10 pages, 7 figures, 3 tables. To appear in ACL-IJCNLP 2015.
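
    As a rough illustration of the grounding idea described above, the sketch below learns term-to-category associations from toy description/scene pairs by simple co-occurrence counting and uses them to rank candidate object categories for a lexical term. This is an assumption-laden stand-in, not the paper's method or data; the training pairs and category names are invented for the example.

    from collections import Counter, defaultdict

    # Toy "annotated scenes": (description terms, object categories placed in the scene).
    # In the paper this information comes from 3D scenes annotated with natural language;
    # the pairs and category names here are invented for illustration.
    training_pairs = [
        (["round", "table", "lamp"], ["RoundTable", "DeskLamp"]),
        (["desk", "lamp", "chair"], ["Desk", "DeskLamp", "OfficeChair"]),
        (["coffee", "table"], ["CoffeeTable"]),
    ]

    term_category_counts = defaultdict(Counter)
    for terms, categories in training_pairs:
        for t in terms:
            term_category_counts[t].update(categories)

    def ground(term, top_k=2):
        """Return the object categories most associated with a lexical term."""
        counts = term_category_counts[term]
        total = sum(counts.values()) or 1
        return [(cat, n / total) for cat, n in counts.most_common(top_k)]

    print(ground("lamp"))   # highest-scoring referent first, e.g. ('DeskLamp', 0.4)
    print(ground("table"))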

    Adversarial Learning for Neural Dialogue Generation

    In this paper, drawing intuition from the Turing test, we propose using adversarial training for open-domain dialogue generation: the system is trained to produce sequences that are indistinguishable from human-generated dialogue utterances. We cast the task as a reinforcement learning (RL) problem where we jointly train two systems: a generative model to produce response sequences, and a discriminator, analogous to the human evaluator in the Turing test, to distinguish between the human-generated dialogues and the machine-generated ones. The outputs from the discriminator are then used as rewards for the generative model, pushing the system to generate dialogues that mostly resemble human dialogues. In addition to adversarial training, we describe a model for adversarial evaluation that uses success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls. Experimental results on several metrics, including adversarial evaluation, demonstrate that the adversarially-trained system generates higher-quality responses than previous baselines.
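
    A minimal sketch of the training loop this abstract describes, assuming a PyTorch implementation on toy data: a generator samples replies, a discriminator is trained to separate human from generated replies, and the discriminator's score is fed back to the generator as a REINFORCE reward. Model sizes, the random stand-in "human" replies, and all hyperparameters below are illustrative assumptions, not the authors' setup.

    import torch
    import torch.nn as nn
    from torch.distributions import Categorical

    VOCAB, HIDDEN, MAX_LEN, BATCH = 100, 64, 10, 8

    class Generator(nn.Module):
        """Samples a reply token by token from an encoded dialogue context."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, HIDDEN)
            self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
            self.out = nn.Linear(HIDDEN, VOCAB)

        def sample(self, h):
            tokens, logps = [], []
            tok = torch.zeros(h.size(0), dtype=torch.long)  # <bos> token id 0
            for _ in range(MAX_LEN):
                h = self.rnn(self.embed(tok), h)
                dist = Categorical(logits=self.out(h))
                tok = dist.sample()
                tokens.append(tok)
                logps.append(dist.log_prob(tok))
            return torch.stack(tokens, 1), torch.stack(logps, 1)

    class Discriminator(nn.Module):
        """Scores a reply: estimated probability that it is human-generated."""
        def __init__(self):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, HIDDEN)
            self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
            self.clf = nn.Linear(HIDDEN, 1)

        def forward(self, tokens):
            _, h = self.rnn(self.embed(tokens))
            return torch.sigmoid(self.clf(h[-1])).squeeze(-1)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()

    human = torch.randint(1, VOCAB, (BATCH, MAX_LEN))  # stand-in for real human replies
    context = torch.zeros(BATCH, HIDDEN)               # stand-in for an encoded dialogue history

    for step in range(100):
        # 1) Discriminator update: push human replies toward 1, sampled replies toward 0.
        with torch.no_grad():
            fake, _ = G.sample(context)
        d_loss = bce(D(human), torch.ones(BATCH)) + bce(D(fake), torch.zeros(BATCH))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Generator update: REINFORCE, using the discriminator's score as the reward.
        fake, logp = G.sample(context)
        reward = D(fake).detach()                      # one scalar reward per sampled reply
        g_loss = -(logp.sum(dim=1) * reward).mean()
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()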

    Deep Reinforcement Learning for Dialogue Generation

    Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be shortsighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity and length, as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.
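
    To make the three conversational properties concrete, here is a simplified sketch of how such per-turn rewards could be composed. It is an assumption-laden stand-in: the paper computes these terms from seq2seq likelihoods and encoder states, whereas this example uses bag-of-words cosine similarities, caller-supplied log-probabilities, and arbitrary weights.

    import numpy as np

    def bow(sentence, vocab):
        """Toy bag-of-words vector; the paper uses RNN encoder representations instead."""
        v = np.zeros(len(vocab))
        for w in sentence.split():
            if w in vocab:
                v[vocab[w]] += 1.0
        return v

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

    def turn_reward(new_turn, prev_turn, dull_responses, vocab,
                    log_p_forward, log_p_backward, weights=(0.25, 0.25, 0.5)):
        """Combine ease of answering, informativity, and coherence for one turn.

        log_p_forward / log_p_backward stand in for the seq2seq likelihoods
        log p(new_turn | prev_turn) and log p(prev_turn | new_turn).
        """
        # Ease of answering: penalize turns that resemble dull, dead-end replies.
        r1 = -np.mean([cosine(bow(new_turn, vocab), bow(d, vocab)) for d in dull_responses])
        # Informativity: penalize repeating the previous turn.
        r2 = -cosine(bow(new_turn, vocab), bow(prev_turn, vocab))
        # Coherence: reward mutual predictability between consecutive turns.
        r3 = log_p_forward + log_p_backward
        w1, w2, w3 = weights
        return w1 * r1 + w2 * r2 + w3 * r3

    vocab = {w: i for i, w in enumerate("i don't know what you mean how are doing".split())}
    print(turn_reward("how are you doing", "i don't know",
                      dull_responses=["i don't know", "i don't know what you mean"],
                      vocab=vocab, log_p_forward=-2.0, log_p_backward=-2.5))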

    Gene expression patterns in the hippocampus and amygdala of endogenous depression and chronic stress models

    The etiology of depression is still poorly understood, but two major causative hypotheses have been put forth: the monoamine deficiency and the stress hypotheses of depression. We evaluate these hypotheses using animal models of endogenous depression and chronic stress. The endogenously depressed rat and its control strain were developed by bidirectional selective breeding from the Wistar–Kyoto (WKY) rat, an accepted model of major depressive disorder (MDD). The WKY More Immobile (WMI) substrain shows high immobility/despair-like behavior in the forced swim test (FST), while the control substrain, WKY Less Immobile (WLI), shows no depressive behavior in the FST. Chronic stress responses were investigated by using Brown Norway, Fischer 344, Lewis and WKY, genetically and behaviorally distinct strains of rats. Animals were either not stressed (NS) or exposed to chronic restraint stress (CRS). Genome-wide microarray analyses identified differentially expressed genes in hippocampi and amygdalae of the endogenous depression and the chronic stress models. No significant difference was observed in the expression of monoaminergic transmission-related genes in either model. Furthermore, very few genes showed overlapping changes in the WMI vs WLI and CRS vs NS comparisons, strongly suggesting divergence between endogenous depressive behavior- and chronic stress-related molecular mechanisms. Taken together, these results posit that although chronic stress may induce depressive behavior, its molecular underpinnings differ from those of endogenous depression in animals and possibly in humans, suggesting the need for different treatments. The identification of novel endogenous depression-related and chronic stress response genes suggests that unexplored molecular mechanisms could be targeted for the development of novel therapeutic agents.
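
    The comparison described above reduces to finding differentially expressed genes in each contrast and then intersecting the two gene sets. Below is a minimal sketch of that workflow on simulated data, using per-gene t-tests with Benjamini-Hochberg correction; the group sizes, planted effect sizes, and threshold are illustrative assumptions and not the study's actual microarray pipeline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def differential_genes(group_a, group_b, alpha=0.05):
        """Indices of genes with Benjamini-Hochberg-adjusted p-values below alpha.

        group_a, group_b: expression matrices of shape (samples, genes).
        """
        _, p = stats.ttest_ind(group_a, group_b, axis=0)
        order = np.argsort(p)
        m = len(p)
        ranked = p[order] * m / np.arange(1, m + 1)
        adjusted = np.empty(m)
        adjusted[order] = np.minimum.accumulate(ranked[::-1])[::-1]  # step-up monotonicity
        return set(np.flatnonzero(adjusted < alpha))

    # Simulated expression matrices standing in for the hippocampal microarray data:
    # 6 animals per group, 2000 genes, with planted group differences.
    genes = 2000
    wmi, wli = rng.normal(size=(6, genes)), rng.normal(size=(6, genes))
    crs, ns = rng.normal(size=(6, genes)), rng.normal(size=(6, genes))
    wmi[:, :50] += 2.0    # pretend 50 genes differ between WMI and WLI
    crs[:, 25:75] += 2.0  # pretend 50 stress-responsive genes, partially overlapping

    depression_genes = differential_genes(wmi, wli)
    stress_genes = differential_genes(crs, ns)
    print(len(depression_genes), len(stress_genes), len(depression_genes & stress_genes))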

    Dog News

    Reprinted from Dog News, February 193